Error Bounds for Approximation with Neural Networks
Abstract
In this paper we prove convergence rates for the problem of approximating functions f by neural networks and similar constructions. We show that the smoother the activation functions are, the better the rates, provided that f satisfies an integral representation. We give error bounds not only in Hilbert spaces but in general Sobolev spaces $W^{m,r}(\Omega)$. Finally, we apply our results to a class of perceptrons and present a sufficient smoothness condition on f guaranteeing the integral representation.
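For orientation, here is a minimal sketch of the kind of statement involved, written in standard Barron/Maurey-style notation; the parameter set $P$, the weight $w$, and the $n^{-1/2}$ exponent are illustrative assumptions, not the paper's exact result:

% Illustrative sketch (assumed notation, not the paper's exact statement).
\[
  f(x) = \int_P w(a,b)\,\sigma(a \cdot x + b)\,d\mu(a,b), \qquad x \in \Omega,
\]
\[
  \| f - f_n \|_{W^{m,r}(\Omega)} \le C\, n^{-1/2},
\]
% where f_n is a network with n hidden units drawn from the representation above;
% smoother activations \sigma allow the bound to hold in stronger Sobolev norms (larger m).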
Similar Resources
Uniform Approximation and the Complexity of Neural Networks
This work studies some of the approximation properties of feedforward neural networks as a function of the number of nodes. Two cases are considered: sigmoidal and radial basis function networks. Bounds for the approximation error are given. The methods through which we arrive at the bounds are constructive. The error studied is the $L_\infty$, or sup, error.
Almost Linear VC Dimension Bounds for Piecewise Polynomial Networks
We compute upper and lower bounds on the VC dimension and pseudo-dimension of feedforward neural networks composed of piecewise polynomial activation functions. We show that if the number of layers is fixed, then the VC dimension and pseudo-dimension grow as $W \log W$, where $W$ is the number of parameters in the network. This result stands in opposition to the case where the number of layers is unbounded.
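For orientation, the fixed-depth statement being summarized can be written as below; treating the rate as tight in both directions is an assumption for illustration, not a quote from the abstract:

% Illustrative form (assumed tightness).
\[
  \mathrm{VCdim}(\mathcal{F}_{L,W}) = \Theta(W \log W) \qquad \text{for fixed depth } L,
\]
% where \mathcal{F}_{L,W} denotes the class computed by depth-L piecewise
% polynomial networks with W parameters; without the fixed-depth restriction
% the growth in W can be strictly faster.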
Uniform Approximation by Neural Networks Activated by First and Second Order Ridge Splines
We establish sup-norm error bounds for functions that are approximated by linear combinations of first and second order ridge splines and show that these bounds are near-optimal.
An Integral Upper Bound for Neural Network Approximation
Complexity of one-hidden-layer networks is studied using tools from nonlinear approximation and integration theory. For functions with suitable integral representations in the form of networks with infinitely many hidden units, upper bounds are derived on the speed of decrease of approximation error as the number of network units increases. These bounds are obtained for various norms using the ...
On the optimality of neural-network approximation using incremental algorithms
The problem of approximating functions by neural networks using incremental algorithms is studied. For functions belonging to a rather general class, characterized by certain smoothness properties with respect to the $L_2$ norm, we compute upper bounds on the approximation error, where error is measured by the $L_q$ norm, $1 \le q \le \infty$. These results extend previous work, applicable in the ca...
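As a sketch of the incremental scheme such bounds concern, in standard greedy-approximation notation; the convex update rule and the $n^{-1/2}$ rate are assumptions for illustration, not taken from this abstract:

% Illustrative greedy/incremental step (assumed form).
\[
  f_n = (1 - \alpha_n)\, f_{n-1} + \alpha_n g_n, \qquad g_n \in G,\ \alpha_n \in [0,1],
\]
\[
  \| f - f_n \|_{L_q} \le C\, n^{-1/2}, \qquad 1 \le q \le \infty,
\]
% where G is the set of hidden-unit functions from which the network is built.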
Journal: Journal of Approximation Theory
Volume: 112
Pages: -
Published: 2001